We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting the sparsity of inter-layer filter dependencies. We validate our approach by using it to train more efficient variants of state-of-the-art CNN architectures, evaluated on the CIFAR10 and ILSVRC datasets. Our results show similar or higher accuracy than the baseline architectures with much less computation, as measured by CPU and GPU timings. For example, for ResNet 50, our model has 40% fewer parameters, 45% fewer floating point operations, and is 31% (12%) faster on a CPU (GPU). For the deeper ResNet 200, our model has 25% fewer floating point operations and 44% fewer parameters, while maintaining state-of-the-art accuracy. For GoogLeNet, our model has 7% fewer parameters and is 21% (16%) faster on a CPU (GPU).
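To give intuition for how sparse inter-layer filter connectivity reduces cost, the following is a minimal sketch, not the paper's exact architecture: it assumes the block-sparse connection pattern is realized with grouped convolutions (PyTorch's groups parameter), where each output filter sees only a subset of the input channels. The layer sizes (256 channels, 8 groups) are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

# Standard dense convolution: every output filter connects to all
# 256 input channels.
dense = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1)

# Grouped convolution: with 8 filter groups, each output filter connects
# to only 256/8 = 32 input channels, i.e. a block-sparse inter-layer
# connection structure.
grouped = nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=8)

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print(n_params(dense))    # 590,080 weights + biases
print(n_params(grouped))  # 73,984 -- roughly 1/8 the weights

# Both layers produce outputs of the same shape, so the grouped layer
# is a drop-in replacement with fewer parameters and FLOPs.
x = torch.randn(1, 256, 32, 32)
assert dense(x).shape == grouped(x).shape
```

Under this assumption, both the parameter count and the floating point operations of the convolution shrink roughly by the number of groups, which is the source of the reported reductions in computation.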